Maybe, maybe you can share it through the chat.
Yeah, sure.
I'm just going to read and then just I'm going to text in the chat.
So the question is, considering the potential risks and limitations, could it be safely and...
I have a very straightforward answer, and it is that judges should not rely on any AI system as an exclusive source for making any decision.
So they should not rely on the outputs of AI systems when making decisions.
AI systems are never infallible.
Among people who work with machine learning and large language models, there are ongoing attempts to make these systems more precise, but they are never 100% reliable.
There is always a certain percentage of answers or outputs that will simply not fit what the person is looking for.
That's one thing.
The second thing is that legal matters almost always require a certain value judgment, and AI systems are not moral systems.
They are not agents that have, let's say, beliefs or that have an understanding of what's good or what's bad, what's fair, what's unfair.
They do not reason like human beings.
These systems are things, and inasmuch as judges are required to make value judgments about whatever is at stake in order to reach a decision, AI systems will not replace humans.
The outputs of AI systems should be used as a source of information that must be critically assessed, just as with any other source of information that judges might use to reach decisions.
So I hope this answers the question.
Thank you.
I think this was very interesting food for thought at the very beginning.
And with that we stop the recording.
Presenters
Accessible via
Open access
Duration
00:03:13 min
Recording date
2024-11-30
Uploaded on
2024-11-30 09:16:15
Language
en-US